Decoding the Black Box: Discerning AI Rhetorics About and Through Poetic Prompting

Edgar, P. D., Hall, Alia

arXiv.org Artificial Intelligence

Prompt engineering has emerged as a useful way of studying the algorithmic tendencies and biases of large language models (LLMs). Meanwhile, creatives and academics have leveraged LLMs to develop creative works and to explore the boundaries of their writing capabilities through text generation and code. This study suggests that creative text prompting, specifically "Poetry Prompt Patterns," may be a useful addition to the prompt engineer's toolbox, and outlines the process by which this approach may be taken. The paper then uses poetic prompts to assess three models' descriptions and evaluations of a renowned poet, and to test the consequences of models' willingness to adapt or rewrite original creative works for presumed audiences. Since the release of public-facing chat-style large language model (LLM) natural language generators (NLGs) like ChatGPT and Claude, public debate has acknowledged their great potential for creativity, as well as the ways in which they can be leveraged to make representations that do not reflect reality.
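The idea of a "Poetry Prompt Pattern" can be sketched as a reusable template that wraps a probe question in a poetic constraint before it is sent to a model. The function name, template wording, and default form below are illustrative assumptions, not the paper's actual patterns:

```python
# Hypothetical sketch of a "Poetry Prompt Pattern": a reusable template
# that constrains the model's answer to a poetic form. The template text
# is an assumption for illustration, not taken from the paper.

def poetry_prompt(topic: str, form: str = "sonnet") -> str:
    """Build a creative-text prompt that asks the model to respond in verse."""
    return (
        f"Write a {form} about {topic}. "
        "Stay faithful to the facts, but let the form shape your word choice."
    )

# Example: probing how a model describes a poet under a formal constraint.
prompt = poetry_prompt("the life and work of a renowned poet")
```

Varying `topic` while holding `form` fixed lets the same pattern be reused across models, which is what makes it useful as an engineering probe rather than a one-off creative request.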


AdvisingWise: Supporting Academic Advising in Higher Education Settings Through a Human-in-the-Loop Multi-Agent Framework

Jiang, Wendan, Wang, Shiyuan, Eltigani, Hiba, Haroon, Rukhshan, Faisal, Abdullah Bin, Dogar, Fahad

arXiv.org Artificial Intelligence

Academic advising is critical to student success in higher education, yet high student-to-advisor ratios limit advisors' capacity to provide timely support, particularly during peak periods. Recent advances in Large Language Models (LLMs) present opportunities to enhance the advising process. We present AdvisingWise, a multi-agent system that automates time-consuming tasks, such as information retrieval and response drafting, while preserving human oversight. AdvisingWise leverages authoritative institutional resources and adaptively prompts students about their academic backgrounds to generate reliable, personalized responses. All system responses undergo human advisor validation before delivery to students. We evaluate AdvisingWise through a mixed-methods approach: (1) expert evaluation on responses of 20 sample queries, (2) LLM-as-a-judge evaluation of the information retrieval strategy, and (3) a user study with 8 academic advisors to assess the system's practical utility. Our evaluation shows that AdvisingWise produces accurate, personalized responses. Advisors reported increasingly positive perceptions after using AdvisingWise, as their initial concerns about reliability and personalization diminished. We conclude by discussing the implications of human-AI synergy on the practice of academic advising.
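The retrieve-draft-validate flow described in the abstract can be sketched as a small pipeline in which no response reaches a student without an advisor's approval. All function names and the placeholder retrieval/drafting logic below are hypothetical stand-ins, not AdvisingWise's actual components:

```python
# Minimal sketch of a human-in-the-loop advising pipeline, assuming a
# retrieve -> draft -> human-validate flow. retrieve(), draft_response(),
# and the validate callback are hypothetical placeholders; in the real
# system, retrieval targets institutional resources and drafting is LLM-based.

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Draft:
    query: str
    sources: List[str]
    text: str

def retrieve(query: str) -> List[str]:
    # Placeholder: fetch passages from authoritative institutional resources.
    return [f"policy snippet relevant to: {query}"]

def draft_response(query: str, sources: List[str]) -> Draft:
    # Placeholder: an LLM agent would draft a personalized answer here.
    return Draft(query, sources,
                 f"Draft answer to '{query}' citing {len(sources)} source(s).")

def advise(query: str, validate: Callable[[Draft], bool]) -> Optional[str]:
    draft = draft_response(query, retrieve(query))
    # Every response undergoes human advisor validation before delivery.
    return draft.text if validate(draft) else None

answer = advise("Can I drop a course after week 6?", validate=lambda d: True)
```

The design point is that the advisor sits on the only path to the student: a rejected draft returns nothing rather than an unvetted answer.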


Swarms of Large Language Model Agents for Protein Sequence Design with Experimental Validation

Wang, Fiona Y., Lee, Di Sheng, Kaplan, David L., Buehler, Markus J.

arXiv.org Artificial Intelligence

Designing proteins de novo with tailored structural, physicochemical, and functional properties remains a grand challenge in biotechnology, medicine, and materials science, due to the vastness of sequence space and the complex coupling between sequence, structure, and function. Current state-of-the-art generative methods, such as protein language models (PLMs) and diffusion-based architectures, often require extensive fine-tuning, task-specific data, or model reconfiguration to support objective-directed design, thereby limiting their flexibility and scalability. To overcome these limitations, we present a decentralized, agent-based framework inspired by swarm intelligence for de novo protein design. In this approach, multiple large language model (LLM) agents operate in parallel, each assigned to a specific residue position. These agents iteratively propose context-aware mutations by integrating design objectives, local neighborhood interactions, and memory and feedback from previous iterations. This position-wise, decentralized coordination enables emergent design of diverse, well-defined sequences without reliance on motif scaffolds or multiple sequence alignments, validated with experiments on proteins with alpha helix and coil structures. Through analyses of residue conservation, structure-based metrics, and sequence convergence and embeddings, we demonstrate that the framework exhibits emergent behaviors and effective navigation of the protein fitness landscape. Our method achieves efficient, objective-directed designs within a few GPU-hours and operates entirely without fine-tuning or specialized training, offering a generalizable and adaptable solution for protein design. Beyond proteins, the approach lays the groundwork for collective LLM-driven design across biomolecular systems and other scientific discovery tasks.
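The position-wise, decentralized coordination described above can be illustrated with a toy loop in which one "agent" per residue position proposes a mutation and keeps it only if a shared objective does not degrade. The scoring function and greedy acceptance rule below are simplifying assumptions standing in for the paper's LLM agents and their context-aware proposals:

```python
# Toy sketch of position-wise, decentralized sequence design: one "agent"
# per residue position mutates only its own position, guided by a shared
# objective. score() is a hypothetical stand-in for the LLM agents'
# objective-directed, context-aware proposals.

import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def score(seq: str) -> float:
    # Hypothetical objective: reward helix-favoring residues (A, E, L, K).
    return sum(seq.count(a) for a in "AELK") / len(seq)

def agent_propose(seq: str, pos: int, rng: random.Random) -> str:
    # Each agent edits only its assigned position, keeping the rest fixed.
    candidate = seq[:pos] + rng.choice(AMINO_ACIDS) + seq[pos + 1:]
    return candidate if score(candidate) >= score(seq) else seq

def swarm_design(seq: str, iters: int = 50, seed: int = 0) -> str:
    rng = random.Random(seed)
    for _ in range(iters):
        for pos in range(len(seq)):  # agents act position-wise each round
            seq = agent_propose(seq, pos, rng)
    return seq

designed = swarm_design("GGGGGGGGGG")
```

Even this greedy caricature shows the key property: no agent sees or edits the whole sequence, yet a sequence satisfying the global objective emerges from local, iterated proposals.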


Excess Risk Bounds for the Bayes Risk using Variational Inference in Latent Gaussian Models

Sheth, Rishit, Khardon, Roni

Neural Information Processing Systems

We strengthen previous results for variational algorithms by showing that they are competitive with any point-estimate predictor. Unlike previous work, we provide bounds on the risk of the Bayesian predictor and not just the risk of the Gibbs predictor for the same approximate posterior.



Achieving Safe Control Online through Integration of Harmonic Control Lyapunov-Barrier Functions with Unsafe Object-Centric Action Policies

Fawn, Marlow, Scheutz, Matthias

arXiv.org Artificial Intelligence

Open-world environments pose many challenges for autonomous robots as unexpected events or task modulations can make learned robot behavior inapplicable or obsolete. Consider, for example, a robot that has learned to autonomously perform a sorting task on a table top without any human interventions when a human co-worker steps in to help with finishing the task. This change in task environment now requires the robot to avoid colliding with the human whose arms are extended into the robot's work space and are dynamically changing position. Even if the robot has the perceptual capability to detect and track the human's arms and hands, its trained action policy does not provide a way to account for the motion constraints they impose. Or consider a delivery robot in a warehouse that has an optimized policy for traversing indoor spaces when dynamic constraints are imposed on where it can drive (e.g., because parts of the floor are painted).


Quantum Machine Learning via Contrastive Training

Zhukas, Liudmila A., Zhang, Vivian Ni, Miao, Qiang, Wang, Qingfeng, Cetina, Marko, Kim, Jungsang, Carin, Lawrence, Monroe, Christopher

arXiv.org Artificial Intelligence

Quantum machine learning (QML) has attracted growing interest with the rapid parallel advances in large-scale classical machine learning and quantum technologies. Similar to classical machine learning, QML models also face challenges arising from the scarcity of labeled data, particularly as their scale and complexity increase. Here, we introduce self-supervised pretraining of quantum representations that reduces reliance on labeled data by learning invariances from unlabeled examples. We implement this paradigm on a programmable trapped-ion quantum computer, encoding images as quantum states. In situ contrastive pretraining on hardware yields a representation that, when fine-tuned, classifies image families with higher mean test accuracy and lower run-to-run variability than models trained from random initialization. Performance improvement is especially significant in regimes with limited labeled training data. We show that the learned invariances generalize beyond the pretraining image samples. Unlike prior work, our pipeline derives similarity from measured quantum overlaps and executes all training and classification stages on hardware. These results establish a label-efficient route to quantum representation learning, with direct relevance to quantum-native datasets and a clear path to larger classical inputs.
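The idea of deriving similarity from measured quantum overlaps can be sketched with an InfoNCE-style contrastive loss that takes fidelities as its similarity scores. The overlap values and temperature below are synthetic assumptions; on hardware they would come from overlap measurements between encoded states:

```python
# Sketch of a contrastive objective where similarity comes from measured
# state overlaps (fidelities) rather than classical embeddings. The
# overlap values here are synthetic stand-ins for hardware measurements,
# and the temperature is an illustrative choice.

import math

def contrastive_loss(pos_overlap: float, neg_overlaps, temp: float = 0.5) -> float:
    """InfoNCE-style loss: raise the positive pair's overlap, suppress negatives."""
    logits = [pos_overlap / temp] + [o / temp for o in neg_overlaps]
    z = sum(math.exp(l) for l in logits)
    return -math.log(math.exp(pos_overlap / temp) / z)

# A well-separated representation (high positive overlap, low negative
# overlaps) should incur a lower loss than a poorly separated one.
good = contrastive_loss(0.95, [0.05, 0.10])
bad = contrastive_loss(0.40, [0.35, 0.45])
```

Minimizing such a loss during pretraining needs no labels at all, which is how the approach reduces reliance on labeled data before fine-tuning.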